87 research outputs found

    OpenSPIM - an open access platform for light sheet microscopy

    Light sheet microscopy promises to revolutionize developmental biology by enabling live in toto imaging of entire embryos with minimal phototoxicity. We present detailed instructions for building a compact and customizable Selective Plane Illumination Microscopy (SPIM) system. The integrated OpenSPIM hardware and software platform is shared with the scientific community through a public website, thereby making light sheet microscopy accessible for widespread use and optimization to various applications. (Comment: 7 pages, 3 figures, 6 supplementary videos; submitted to Nature Methods; associated public website http://openspim.org)

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
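
    As an illustration of the plugin framework described above, here is a minimal sketch of what an ImageJ2/SciJava command can look like in Java. The class name and menu path are hypothetical; only the @Plugin/@Parameter annotations and the injected Dataset and LogService reflect the actual framework, and the example is a sketch rather than code from the ImageJ2 paper.

        import net.imagej.Dataset;
        import org.scijava.command.Command;
        import org.scijava.log.LogService;
        import org.scijava.plugin.Parameter;
        import org.scijava.plugin.Plugin;

        // Hypothetical example command: reports the size of each axis of the active dataset,
        // illustrating the N-dimensional data model and declarative @Parameter injection.
        @Plugin(type = Command.class, menuPath = "Plugins > Examples > Report Dimensions")
        public class ReportDimensions implements Command {

            @Parameter
            private Dataset dataset;   // the active image, injected by the framework

            @Parameter
            private LogService log;    // SciJava logging service, also injected

            @Override
            public void run() {
                log.info("Active dataset has " + dataset.numDimensions() + " dimensions");
                for (int d = 0; d < dataset.numDimensions(); d++) {
                    log.info("  axis " + d + ": " + dataset.dimension(d) + " samples");
                }
            }
        }

    Because the command never touches the user interface directly, the same class can run from the menus, from a script, or headlessly, which reflects the separation of concerns the abstract describes.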

    The Virtual Insect Brain protocol: creating and comparing standardized neuroanatomy

    BACKGROUND: In the fly Drosophila melanogaster, new genetic, physiological, molecular and behavioral techniques for the functional analysis of the brain are rapidly accumulating. These diverse investigations of insect brain function share, as a common ground, gene expression patterns that can be visualized and that provide the means for manipulating groups of neurons. To take advantage of these patterns one needs to know their typical anatomy. RESULTS: This paper describes the Virtual Insect Brain (VIB) protocol, a script suite for the quantitative assessment, comparison, and presentation of neuroanatomical data. It is based on the 3D reconstruction and visualization software Amira, version 3.x (Mercury Inc.) [1]. Besides its backbone, a standardization procedure that aligns individual 3D images (series of virtual sections obtained by confocal microscopy) to a common coordinate system and computes average intensities for each voxel (volume pixel), the VIB protocol provides an elaborate data management system for data administration. The VIB protocol facilitates direct comparison of gene expression patterns and describes their interindividual variability. It provides volumetry of brain regions and helps to characterize the phenotypes of brain structure mutants. Using the VIB protocol does not require any programming skills, since all operations are carried out through an intuitive graphical user interface. Although the VIB protocol has been developed for the standardization of Drosophila neuroanatomy, the program structure can be used for the standardization of other 3D structures as well. CONCLUSION: Standardizing brains and gene expression patterns is a new approach to biological shape and its variability. The VIB protocol provides a first set of tools supporting this endeavor in Drosophila. The script suite is freely available at [2].
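
    The core of the standardization step described above, computing average intensities voxel by voxel across individually aligned recordings, can be illustrated with a short sketch in Java (chosen for consistency with the other tools in this list). This is not the Amira-based VIB script itself; it only shows the averaging idea and assumes the stacks have already been registered to the common coordinate system and share identical dimensions.

        import java.util.List;

        // Sketch: voxel-wise averaging of already-aligned confocal stacks into an "average brain".
        // Each stack is a flat float array of length width * height * depth.
        public class AverageBrain {

            static float[] averageStacks(List<float[]> alignedStacks) {
                int n = alignedStacks.size();
                float[] average = new float[alignedStacks.get(0).length];
                for (float[] stack : alignedStacks) {
                    for (int i = 0; i < average.length; i++) {
                        average[i] += stack[i] / n;   // accumulate the mean intensity per voxel
                    }
                }
                return average;
            }
        }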

    A high-level 3D visualization API for Java and ImageJ

    BACKGROUND: Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. RESULTS: Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. CONCLUSIONS: Our framework enables biomedical image software to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de
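
    As a sketch of the high-level access mentioned above, the snippet below opens an image stack and volume-renders it in the 3D viewer. The file path is hypothetical, and the snippet is an illustration built around the library's Image3DUniverse entry point rather than an excerpt from its documentation.

        import ij.IJ;
        import ij.ImagePlus;
        import ij3d.Image3DUniverse;

        // Sketch: display an image stack as a volume rendering in the ImageJ 3D viewer.
        public class Show3D {

            public static void main(String[] args) {
                ImagePlus imp = IJ.openImage("/path/to/stack.tif");  // hypothetical input stack
                Image3DUniverse universe = new Image3DUniverse();
                universe.show();          // open the interactive 3D window
                universe.addVoltex(imp);  // add the stack as volume-rendered content
            }
        }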

    Crowdsourcing the creation of image segmentation algorithms for connectomics

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
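
    The scoring idea described above, agreement between a predicted segmentation and a consensus labeling evaluated over pixel pairs, can be sketched as a plain Rand index. The actual challenge metrics (and the border-thinning fix proposed in the paper) differ in detail, so the Java sketch below only illustrates the kind of pairwise agreement score involved; it assumes both segmentations are given as per-pixel, non-negative integer region labels of equal length.

        import java.util.HashMap;
        import java.util.Map;

        // Sketch: Rand index between two segmentations of the same image, i.e. the fraction
        // of pixel pairs on which they agree about "same region" versus "different regions".
        public class SegmentationScore {

            static double randIndex(int[] predicted, int[] groundTruth) {
                Map<Long, Long> jointCounts = new HashMap<>();
                Map<Integer, Long> predCounts = new HashMap<>();
                Map<Integer, Long> truthCounts = new HashMap<>();
                for (int i = 0; i < predicted.length; i++) {
                    long key = ((long) predicted[i] << 32) | (groundTruth[i] & 0xffffffffL);
                    jointCounts.merge(key, 1L, Long::sum);
                    predCounts.merge(predicted[i], 1L, Long::sum);
                    truthCounts.merge(groundTruth[i], 1L, Long::sum);
                }
                double sameInBoth = 0, sameInPred = 0, sameInTruth = 0;
                for (long c : jointCounts.values()) sameInBoth  += c * (c - 1) / 2.0;
                for (long c : predCounts.values())  sameInPred  += c * (c - 1) / 2.0;
                for (long c : truthCounts.values()) sameInTruth += c * (c - 1) / 2.0;
                double n = predicted.length;
                double allPairs = n * (n - 1) / 2.0;
                // agreements = pairs joined in both labelings + pairs separated in both labelings
                return (allPairs + 2 * sameInBoth - sameInPred - sameInTruth) / allPairs;
            }
        }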

    TrakEM2 Software for Neural Circuit Reconstruction

    A key challenge in neuroscience is the expeditious reconstruction of neuronal circuits. For model systems such as Drosophila and C. elegans, the limiting step is no longer the acquisition of imagery but the extraction of the circuit from images. For this purpose, we designed a software application, TrakEM2, that addresses the systematic reconstruction of neuronal circuits from large electron microscopy and optical image volumes. We address the challenges of composing image volumes from individual, deformed images; of reconstructing neuronal arbors and annotating synapses with fast manual and semi-automatic methods; and of managing large collections of both images and annotations. The output is a neural circuit of 3D arbors and synapses, encoded in NeuroML and other formats, ready for analysis.

    There is no such thing as ‘undisturbed’ soil and sediment sampling: sampler-induced deformation of salt marsh sediments revealed by 3D X-ray computed tomography

    Purpose: Within most environmental contexts, the collection of 'undisturbed' samples is widely relied upon in studies of soil and sediment properties and structure. However, the impact of sampler-induced disturbance is rarely acknowledged, despite the potential significance of modification to sediment structure for the robustness of data interpretation. In this study, 3D X-ray computed microtomography (μCT) is used to evaluate and compare the disturbance imparted by four commonly used sediment sampling methods within a coastal salt marsh. Materials and methods: Paired sediment core samples from a restored salt marsh at Orplands Farm, Essex, UK, were collected using four common sampling methods (push, cut, hammer and gouge). Sampling with two cores of different area ratio resulted in a total of 16 cores that were scanned using 3D X-ray computed tomography to identify and evaluate structural properties of the samples that can be attributed to the sampling method. Results and discussion: Qualitative 3D analysis identifies a suite of sampling-disturbance structures, including gross-scale changes to sediment integrity and substantial modification of pore space, structure and distribution, independent of sediment strength and stiffness. Quantitative assessment of the changes to pore space and sediment density arising from the four sampling methods offers a means of direct comparison of the impact of the depth-sampling methods. Considerable disturbance results from the use of push, hammer and augering samplers, whilst the least disturbance is found in samples recovered by cutting and advanced trimming approaches. Conclusions: It is evident that with the small-bore tubes and samplers commonly used in environmental studies, all techniques result in disturbance to sediment structure to a far greater extent than previously reported, as revealed by μCT. This work identifies and evaluates for the first time the full nature, extent and significance of internal sediment disturbance arising from common sampling methods.
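
    A minimal sketch of the kind of quantitative pore-space measure used for such comparisons is given below: it reports the fraction of voxels whose attenuation falls below a chosen threshold. The fixed threshold stands in, as an assumption, for the proper segmentation and calibration a real μCT analysis would use; comparing paired cores then reduces to comparing these fractions and the spatial distribution of the classified voxels.

        // Sketch: estimate the pore fraction of a CT volume by counting voxels whose
        // attenuation value lies below a pore/matrix threshold (an assumed, fixed cutoff).
        public class PoreFraction {

            static double poreFraction(short[] attenuationVolume, short poreThreshold) {
                long poreVoxels = 0;
                for (short voxel : attenuationVolume) {
                    if (voxel < poreThreshold) {
                        poreVoxels++;   // voxel classified as pore space
                    }
                }
                return (double) poreVoxels / attenuationVolume.length;
            }
        }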

    Das Standardgehirn von Drosophila melanogaster und seine automatische Segmentierung (The standard brain of Drosophila melanogaster and its automatic segmentation)

    In this thesis, I introduce the Virtual Brain Protocol, which facilitates applications of the Standard Brain of Drosophila melanogaster. By providing reliable and extensible tools for handling neuroanatomical data, this protocol simplifies and organizes the recurring tasks involved in these applications. It is demonstrated that the protocol can also be used to generate average brains, i.e. to combine recordings of several brains showing the same features such that the common features are emphasized. One of the most important steps of the protocol is the alignment of newly recorded data sets with the Standard Brain. After presenting methods commonly applied in biological or medical contexts to align two different recordings, I evaluate to what extent this alignment can be automated. To that end, existing image processing techniques are assessed. I demonstrate that these techniques do not satisfy the requirements for sensible alignments between two brains, and then analyze what needs to be taken into account to formulate an algorithm that does. In the last chapter, I derive such an algorithm using methods from information theory, which places the technique on a solid mathematical foundation, and I show how Bayesian inference can be applied to enhance the results further. This approach yields good results on very noisy images, detecting boundaries between structures that agree well with the visible boundaries. The same approach can be extended to take additional knowledge into account, e.g. the relative position of the anatomical structures and their shape, and it is shown how this extension can be utilized to segment a newly recorded brain automatically.
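
    The abstract does not name the specific information-theoretic measure; the standard choice for intensity-based image alignment is mutual information, sketched below in Java for two images of equal size with 8-bit intensities. A registration procedure would maximize this value over candidate transformations; the Bayesian extension mentioned in the abstract is not shown, and the thesis' exact formulation may differ.

        // Sketch: mutual information between two 8-bit images (flat arrays of equal length),
        // a common information-theoretic similarity measure for image registration.
        public class MutualInformation {

            static double mutualInformation(int[] a, int[] b) {
                final int bins = 256;
                double[][] joint = new double[bins][bins];
                double[] pa = new double[bins];
                double[] pb = new double[bins];
                double n = a.length;
                for (int i = 0; i < a.length; i++) {
                    joint[a[i]][b[i]] += 1.0 / n;       // normalized joint intensity histogram
                }
                for (int x = 0; x < bins; x++) {
                    for (int y = 0; y < bins; y++) {
                        pa[x] += joint[x][y];           // marginal distribution of image a
                        pb[y] += joint[x][y];           // marginal distribution of image b
                    }
                }
                double mi = 0.0;
                for (int x = 0; x < bins; x++) {
                    for (int y = 0; y < bins; y++) {
                        if (joint[x][y] > 0.0) {
                            mi += joint[x][y] * Math.log(joint[x][y] / (pa[x] * pb[y]));
                        }
                    }
                }
                return mi;   // in nats; higher values mean the intensities predict each other better
            }
        }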

    ij_textbook1: Updates from 2015

    Many corrections made during the 2015 BIAS course are now reflected.